15 research outputs found

    Evaluation of low-cost depth cameras for agricultural applications

    Get PDF
    Low-cost depth cameras have been used in many agricultural applications, with reported advantages of low cost, reliability, and speed of measurement. However, some problems were also reported and appear to be technology-related, so understanding the limitations of each type of depth camera technology could provide a basis for technology selection and for the development of research involving its use. The cameras use one, or a combination of two, of the three available technologies: structured light, time-of-flight (ToF), and stereoscopy. The objectives were to evaluate these different technologies for depth sensing, including the accuracy and repeatability of distance data and of measurements at different positions within the image, and the cameras' usefulness in indoor and outdoor settings. The cameras were then tested in a swine facility and in a corn field. Five different cameras were used: (1) Microsoft Kinect v.1, (2) Microsoft Kinect v.2, (3) Intel® RealSense™ Depth Camera D435, (4) ZED Stereo Camera (StereoLabs), and (5) CamBoard Pico Flexx (PMD Technologies). Results indicate that there were significant camera-to-camera differences for the ZED Stereo Camera and the Kinect v.1 (p < 0.05). All cameras showed an increase in standard deviation as the distance between camera and object increased; however, the Intel RealSense camera had a larger increase. Time-of-flight cameras had the smallest error between different sizes of objects, but they had non-readable zones at the corners of the images. The results indicate that ToF is the best technology for indoor applications and stereoscopy is the best technology for outdoor applications.
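
    As an illustration of the kind of accuracy and repeatability analysis described above, the short Python sketch below computes per-distance bias (mean error) and standard deviation from repeated depth readings of a flat target. The distances, readings, and variable names are hypothetical placeholders, not data from the study.

    import numpy as np

    # Hypothetical readings: for each true camera-to-target distance (mm),
    # a set of repeated depth measurements from one camera. Illustrative only.
    readings = {
        1000: np.array([1003, 998, 1005, 1001, 996], dtype=float),
        2000: np.array([2012, 1990, 2015, 2008, 1987], dtype=float),
        3000: np.array([3030, 2975, 3041, 3019, 2968], dtype=float),
    }

    for true_dist, depths in readings.items():
        bias = depths.mean() - true_dist      # accuracy: mean measurement error
        repeatability = depths.std(ddof=1)    # repeatability: spread of repeated readings
        print(f"{true_dist} mm -> bias {bias:+.1f} mm, std {repeatability:.1f} mm")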

    Ground and Aerial Robots for Agricultural Production: Opportunities and Challenges

    Get PDF
    Crop and animal production techniques have changed significantly over the last century. In the early 1900s, animal power was replaced by tractor power, which resulted in tremendous improvements in field productivity and subsequently laid the foundation for mechanized agriculture. While precision agriculture has enabled site-specific management of crop inputs for improved yields and quality, precision livestock farming has boosted efficiencies in the animal and dairy industries. By 2020, highly automated systems were being employed in crop and animal agriculture to increase input efficiency and agricultural output with reduced adverse impact on the environment. Ground and aerial robots combined with artificial intelligence (AI) techniques have the potential to tackle the rising food, fiber, and fuel demands of a rapidly growing population that is slated to reach around 10 billion by the year 2050. This Issue Paper presents the opportunities provided by ground and aerial robots for improved crop and animal production, and the challenges that could potentially limit their progress and adoption. A summary of enabling factors that could drive the deployment and adoption of robots in agriculture is also presented, along with some insights into the training needs of the workforce who will be involved in next-generation agriculture.

    Processamento de imagens em profundidade para melhora do desempenho de matrizes suínas por meio da detecção precoce de claudicação e de alterações no escore de condição corporal [Depth-image processing to improve sow performance through early detection of lameness and of changes in body condition score]

    No full text
    The observation, control, and maintenance of the physical condition of sows at acceptable levels are critical to maintaining animal welfare and production at appropriate standards. Lameness causes pain and makes locomotion difficult; it is a common disorder in sows that negatively affects both welfare and production, since affected animals have fewer born-alive piglets, fewer gestations per year, and are removed from the herd at a younger age than ideal. In addition, it is industry practice to limit-feed sows to ensure that they remain at an ideal condition score. It is known that, during gestation, each sow should receive a different amount of feed according to its body condition. Underweight animals show nutritional deficiency and produce fewer piglets per litter, whereas overweight sows show abnormal development of the mammary glands, reducing the amount of milk produced during lactation and causing economic losses. However, moving sows to group gestation makes it difficult to monitor their condition scores. Both the detection of lameness and the classification of body condition are currently assessed using subjective methods, which are time-consuming and difficult to complete accurately. Therefore, early recognition of animals whose physical condition falls outside the required standards is important to prevent production losses caused both by the aggravation of these conditions and by their impact on the animals' welfare. The objective of this project was to obtain three characteristics (body condition score, mass, and backfat thickness) from depth images, an approach that has proved effective for acquiring these traits in other animals (boars and cows). A second objective was to develop a method for early detection of lameness using a kinematic approach, which has been producing good results and whose difficulties can potentially be reduced by using depth images instead of the reflective-marker method currently used. To predict body condition, a multiple linear regression was obtained using the minor axis of the ellipse fitted around the sow's body, the width at the shoulders, and the angle of the last rib's curvature. To predict backfat, a multiple linear regression was performed using the height of the last rib's curvature, the perimeter of the sow's body, the major axis of the fitted ellipse, the length from snout to rump, and the predicted body condition score. Body mass was obtained with a simple linear regression using the projected volume of the sow's body. For lameness detection, three models presented the best accuracy (76.9%): linear discriminant analysis, fine 1-nearest neighbor, and weighted 10-nearest neighbors. The input variables used in the models were obtained from depth videos (number, time, and length of steps for each of the four regions analyzed, i.e., left and right shoulders and left and right hips; total walk time; and number of local maxima for the head region). These studies demonstrated that a depth camera can be used to automate the acquisition of weight, condition score, and backfat thickness and the detection of lameness in gestating and lactating sows.
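
    A minimal sketch of the multiple-linear-regression structure described for body condition score, assuming scikit-learn is available. The feature values, BCS labels, and the new-sow example are illustrative placeholders, not the thesis's data or fitted coefficients.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Illustrative feature matrix: one row per sow image.
    # Columns: ellipse minor axis (cm), shoulder width (cm), last-rib curvature angle (deg).
    X = np.array([
        [38.2, 41.0, 152.0],
        [42.5, 44.3, 147.5],
        [35.9, 39.1, 156.2],
        [45.1, 46.8, 143.0],
    ])
    y = np.array([2.5, 3.0, 2.0, 3.5])   # visually assessed body condition scores

    model = LinearRegression().fit(X, y)
    new_sow = np.array([[40.0, 42.0, 150.0]])   # hypothetical measurements for one sow
    print("Predicted BCS:", model.predict(new_sow)[0])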

    Obtenção automática da massa de suínos em crescimento e terminação por meio da análise de imagens em profundidade [Automatic estimation of the body mass of growing and finishing pigs through depth-image analysis]

    No full text
    A method of continuously monitoring weight would aid producers by ensuring that all pigs are gaining weight and by increasing the precision of marketing pigs, thus saving money. Electronically monitoring weight without moving the pigs to a scale would also eliminate a source of stress. Therefore, methods for monitoring the physical condition of animals from a distance are needed to obtain higher-quality data. In pig production, weighing the animals plays an important role in controlling the factors that affect herd performance and is an important part of production monitoring. Therefore, this research aimed to extract weight data for pigs from depth images. First, a validation of five Kinect® depth sensors was completed to understand their accuracy. In addition, equations were generated to correct the dimension data (length, area, and volume) provided by these sensors for any distance between the sensor and the animals. Depth images and weights of finishing pigs (gilts and barrows) of three commercial lines (Landrace, Duroc, and Yorkshire based) were acquired. The images were then analyzed with MATLAB (2016a). The pigs in the images were segmented by depth differences, and their volumes were calculated and then adjusted using the correction equation developed. Pigs' dimensions were also acquired to update existing data. Curves of weight versus corrected volume and of corrected dimensions versus weight were fitted. Equations for predicting weight from volume were fitted for gilts and barrows and for each of the three commercial lines used. A reduced equation for all the data, without considering differences between sexes and genetic lines, was also fitted and compared with the individual equations using Efroymson's algorithm. The results showed no significant difference between the reduced equation and the individual equations for barrows and gilts (p < 0.05), and the global equation was also no different from the individual equations for each of the three sire lines (p < 0.05). The global equation can predict weights from a depth sensor with an R² of 0.9905. Therefore, the results of this study show that a depth sensor would be a reasonable approach to continuously monitor weights.
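
    The sketch below illustrates, under stated assumptions, how a projected volume might be computed from a segmented top-view depth map and then related to body mass with a simple linear regression, as described above. The function projected_volume, its parameters, and all numeric values are hypothetical; the study's own correction equations are not reproduced here.

    import numpy as np

    def projected_volume(depth_map, floor_depth, px_area_m2):
        """Approximate a pig's projected volume from a top-view depth map.

        depth_map:   2D array of sensor-to-surface distances (m), with the pig
                     already segmented (background pixels set to floor_depth).
        floor_depth: sensor-to-floor distance (m).
        px_area_m2:  ground-plane area covered by one pixel (m^2).
        """
        height_map = np.clip(floor_depth - depth_map, 0.0, None)  # pig height above the floor
        return float(height_map.sum() * px_area_m2)               # sum of per-pixel columns

    # Simple linear regression of body mass on corrected volume (illustrative numbers).
    volumes = np.array([0.055, 0.071, 0.089, 0.104])   # m^3
    weights = np.array([68.0, 82.0, 101.0, 115.0])     # kg
    slope, intercept = np.polyfit(volumes, weights, 1)
    print(f"mass ~ {slope:.1f} * volume + {intercept:.1f}")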

    Impact of housing environment and management on pre-/post-weaning piglet productivity

    No full text
    The complex environment surrounding young pigs reared in intensive housing systems directly influences their productivity and livelihood. Much of the seminal literature utilized housing and husbandry practices that have since drastically evolved through advances in genetic potential, nutrition, health, and technology. This review focuses on the environmental interactions and responses of pigs during the first 8 wk of life, separated into pre-weaning (creep areas) and post-weaning (nursery or wean-finish) phases. Further, a perspective on instrumentation and precision technologies for animal-based (physiological and behavioral) and environmental measures documents current approaches and future possibilities. A warm microclimate for piglets during the early days of life, especially the first 12 h, is critical. While caretaker interventions can mitigate the extent of hypothermia, low birth weight remains a dominant risk factor for mortality. Post-weaning, thermoregulatory capabilities have improved, but subsequent transportation, nutritional, and social stressors heighten the need for a warm, low-draft environment with proper flooring. A better understanding of the individual environmental factors that affect young pigs, as well as the creation of comprehensive environment indices or improved non-contact sensing technology, is needed to better evaluate and manage piglet environments. Such enhanced understanding and evaluation of pig–environment interaction could lead to innovative environmental control and husbandry interventions that foster healthy and productive pigs. This is the version of record for the article Ramirez, Brett C., Morgan D. Hayes, Isabella CFS Condotta, and Suzanne M. Leonard. "Impact of housing environment and management on pre-/post-weaning piglet productivity." Journal of Animal Science 100, no. 6 (2022): skac142. Available online at DOI: 10.1093/jas/skac142. Copyright 2022 The Author(s). Attribution 4.0 International (CC BY 4.0). Posted with permission.

    Application of fuzzy logic on correlation between piglet’s surface temperature and thermal comfort for Piracicaba

    No full text
    Constant human involvement in animal management activities not only increases the cost of production but also becomes a source of stress for the animals. The animal itself generates the most important signals of the process, acting as a biosensor that needs to be measured continuously and directly. Among the available options for obtaining an animal temperature parameter, measuring surface temperature is a less invasive method. A fuzzy system, based on fuzzy logic, works with imprecise information and converts it into a mathematical language that is easy to implement computationally. Therefore, this work aimed to use fuzzy logic as a tool for assessing the degree of thermal comfort of swine, using input variables related to the environment and to the animal. For this purpose, a simulation was performed in Microsoft Excel with the possible environmental conditions of the Piracicaba region (dry-bulb temperature, Ts, and relative humidity, UR), interacting with the local altitude (z) and potential piglet tympanic temperatures (TiF), using equations 1 and 2 adapted from Mostaço (2014). After the simulations, the rectal temperature (TR) and respiratory frequency (FR) data obtained were crossed with the FR, TR, and critical temperature values for thermal comfort of piglets (5th to 8th week) proposed by Mostaço (2014), using the fuzzy logic toolbox of MATLAB. Ts, UR, and TiF were used as input variables, and the output variable was thermal comfort (TC), rated as good (comfort), bad (stress due to excess or lack of heat), or very bad (death of the animal). Membership functions were generated from the data, correlating all the variables, which were subdivided into relevant groups; furthermore, a system of rules was generated, with which it was possible to simulate inputs of environmental and animal characteristics and obtain the probable thermal comfort condition of the animal. The study concluded that the application of fuzzy logic provided an easily interpreted and adaptable model for estimating a recognized degree of comfort for piglets. The rule system and membership functions could be modified to suit other locations and other growth phases, which indicates the possibility of developing a model that determines the occurrence of heat stress from swine body temperature and environmental data, assisting in decision making and thereby improving welfare and reducing production costs.
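
    A minimal, self-contained sketch of a Mamdani-style fuzzy inference of thermal comfort from Ts, UR, and TiF, implemented with NumPy rather than the MATLAB fuzzy logic toolbox used in the study. The membership-function breakpoints, rules, and comfort scale are illustrative assumptions, not the values from Mostaço (2014) or from this work.

    import numpy as np

    def trimf(x, a, b, c):
        """Triangular membership function evaluated at x."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    # Output universe: thermal comfort score (0 = very bad, 1 = good), illustrative only.
    tc = np.linspace(0.0, 1.0, 101)
    tc_good = trimf(tc, 0.60, 0.85, 1.00)
    tc_bad  = trimf(tc, 0.25, 0.50, 0.75)
    tc_vbad = trimf(tc, 0.00, 0.15, 0.40)

    def comfort(ts, ur, tif):
        # Input membership degrees (breakpoints are placeholders, not from the study).
        ts_ok,   ts_hot   = trimf(ts, 18, 26, 32),     trimf(ts, 28, 36, 45)
        ur_ok,   ur_high  = trimf(ur, 40, 60, 80),     trimf(ur, 70, 90, 100)
        tif_ok,  tif_high = trimf(tif, 38.0, 39.5, 40.5), trimf(tif, 40.0, 41.5, 43.0)

        # Mamdani rules: min for AND, clip each output set, then max-aggregate.
        agg = np.maximum.reduce([
            np.minimum(min(ts_ok, ur_ok, tif_ok),      tc_good),   # comfortable conditions
            np.minimum(min(ts_hot, tif_high),          tc_bad),    # heat stress
            np.minimum(min(ts_hot, ur_high, tif_high), tc_vbad),   # severe heat stress
        ])
        # Centroid defuzzification (returns 0 if no rule fires at all).
        return float(np.sum(tc * agg) / (np.sum(agg) + 1e-9))

    print(comfort(ts=34.0, ur=85.0, tif=41.0))   # lower score -> poorer comfort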

    Estimating body weight and body condition score of mature beef cows using depth images

    Get PDF
    Obtaining accurate body weight (BW) is crucial for management decisions yet can be a challenge for cow–calf producers. Fast-evolving technologies such as depth sensing have been identified as low-cost sensors for agricultural applications but have not been widely validated for U.S. beef cattle. This study aimed to (1) estimate the body volume of mature beef cows from depth images, (2) quantify BW and metabolic weight (MBW) from image-projected body volume, and (3) classify body condition scores (BCS) from image-obtained measurements using a machine-learning-based approach. Fifty-eight crossbred cows with a mean BW of 410.0 ± 60.3 kg and between 4 and 6 yr of age were used for data collection between May and December 2021. A low-cost, commercially available depth sensor was used to collect top-view depth images. Images were processed to obtain cattle biometric measurements, including MBW, body length, average height, maximum body width, dorsal area, and projected body volume. The dataset was partitioned into training and testing datasets using an 80%:20% ratio. Using the training dataset, linear regression models were developed between image-projected body volume and BW measurements, and the results were used to test BW predictions on the testing dataset. A machine-learning-based multivariate analysis was performed with 29 algorithms from eight classifiers to classify BCS using multiple inputs conveniently obtained from the cows and the depth images. A feature-selection algorithm was performed to rank the relevance of each input to the BCS. Results demonstrated a strong positive correlation between the image-projected cow body volume and the measured BW (r = 0.9166). The regression between the cow body volume and the measured BW had a coefficient of determination (R²) of 0.83 and a mean absolute error (MAE) of prediction of 19.2 ± 13.50 kg. When the regression was applied to the testing dataset, an increase in the MAE of the predicted BW (22.7 ± 13.44 kg) but a slightly improved R² (0.8661) were noted. Among all algorithms, the Bagged Tree model in the Ensemble class had the best performance and was used to classify BCS. Classification results demonstrate that the model failed to predict any BCS lower than 4.5, while it accurately classified BCS with true prediction rates of 60%, 63.6%, and 50% for BCS between 4.75 and 5, 5.25 and 5.5, and 5.75 and 6, respectively. This study validated the use of depth imaging to accurately predict BW and classify BCS in U.S. beef cow herds.
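
    A brief sketch of the regression-and-split workflow described above (80%:20% partition, linear regression of BW on image-projected body volume, MAE and R² on the held-out set), assuming scikit-learn is available. The synthetic volume and weight data and the resulting coefficients are placeholders, not the study's measurements.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, r2_score

    # Synthetic stand-in data: image-projected body volume (m^3) and scale body weight (kg).
    rng = np.random.default_rng(0)
    volume = rng.uniform(0.25, 0.45, size=58).reshape(-1, 1)
    weight = 900 * volume.ravel() + 60 + rng.normal(0, 20, size=58)

    # 80%:20% split, mirroring the study design
    X_train, X_test, y_train, y_test = train_test_split(volume, weight, test_size=0.2, random_state=0)

    model = LinearRegression().fit(X_train, y_train)
    pred = model.predict(X_test)
    print("MAE (kg):", mean_absolute_error(y_test, pred))
    print("R^2     :", r2_score(y_test, pred))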